
    Eco Global Evaluation: Cross Benefits of Economic and Ecological Evaluation

    This paper highlights the complementarity of cost and environmental evaluation in a sustainable approach. Starting from the needs and limits of whole-product-lifecycle evaluation, the paper first addresses modeling, data capture, and performance indicators. In a second step, the information issue regarding the whole lifecycle of the product is addressed. To go further than economic evaluation alone, the value concept (for a product or a service) is discussed: value can combine functional requirements, cost objectives, and environmental impact. Finally, knowledge issues arising from the complexity of integrating multi-disciplinary expertise across the whole lifecycle of a product are discussed.

    Reusing Analysis Schemas in ODB Applications: A Chart-Based Approach

    This paper presents a method for creating, indexing and reusing analysis schemas in developing Object-oriented Database (ODB) applications. Analysis schemas are specified by using analysis charts, a user-oriented set of forms structured according to the TQL++ Object-oriented specification model, and are classified according to their structural characteristics and content. A set of analysis charts forms a reusable schema, referred to as an analysis stack. The developer can retrieve and examine stacks by accessing analysis charts containing relevant entity names and structures. Charts are connected by links reproducing TQL++ relationships and connecting 'similar' schemas. The paper presents the measures of similarity between charts and describes the organization of charts in a reuse repository. A Thesaurus of relevant terms and synonyms is coupled with the repository. The Thesaurus and the repository are the basis for guiding developers in deriving new ODB applications through a sequence of steps proposed by a CHarting and Analysis for Reuse Tool (CHART). The methodology for reusing analysis schemas, based on navigation in the repository, and the support tool are described.
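    The idea of measuring similarity between charts by their shared entity names can be sketched generically. The abstract does not reproduce the paper's actual TQL++-based measures, so the Jaccard overlap and the chart contents below are purely illustrative assumptions:

```python
def chart_similarity(chart_a, chart_b):
    """Illustrative similarity between two analysis charts,
    computed as Jaccard overlap of their entity-name sets.
    (A stand-in; not the paper's actual TQL++ measure.)"""
    a, b = set(chart_a), set(chart_b)
    if not a and not b:
        return 1.0  # two empty charts are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical charts from two analysis stacks
order_chart = {"Customer", "Order", "Product"}
invoice_chart = {"Customer", "Invoice", "Product", "Payment"}
print(chart_similarity(order_chart, invoice_chart))  # 2 shared / 5 total = 0.4
```

    A repository tool could rank stored stacks by such a score against a developer's query chart, which matches the retrieval-by-relevant-entity-names workflow the abstract describes.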

    Structuring and extracting knowledge for the support of hypothesis generation in molecular biology

    Background: Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The need for automated support is exemplified by the difficulty of considering all relevant facts contained in the millions of documents available from PubMed. The Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit control over the modeling and extraction processes, we seek a methodology that gives the experimenter control over these critical processes. Results: We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, each relation linked to the corresponding evidence. Conclusion: We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation.
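    The central output described above, putative relations each linked to supporting evidence, can be sketched with plain data structures. The actual output is an OWL/RDF semantic model; the class names, predicate string, and document identifier below are hypothetical illustrations, not the AIDA schema:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A piece of textual evidence backing a putative relation."""
    doc_id: str    # hypothetical document identifier
    sentence: str  # the supporting sentence extracted from the text

@dataclass
class PutativeRelation:
    """A candidate biological relation produced by text mining,
    kept linked to the evidence it was extracted from."""
    subject: str
    predicate: str
    obj: str
    evidence: list = field(default_factory=list)

# Hypothetical relation in the spirit of the abstract's example finding
rel = PutativeRelation("NF-Kappa B", "plays_role_in",
                       "epigenetic gene regulation")
rel.evidence.append(Evidence("PMID:0000000",
                             "…hypothetical supporting sentence…"))
print(rel.subject, "->", rel.obj, "| evidence items:", len(rel.evidence))
```

    Keeping each relation paired with its evidence is what makes posterior analysis across experiments possible: a later reviewer can always trace a candidate relation back to the sentences it came from.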

    Benchmarking Ontologies: Bigger or Better?

    A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly, as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them.
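    A breadth-style comparison of ontology against corpus can be illustrated with a generic coverage ratio. The paper's actual metrics are not given in the abstract; the term-coverage function and the mini-corpus below are illustrative stand-ins only:

```python
from collections import Counter

def coverage(ontology_terms, corpus_tokens):
    """Illustrative 'breadth' proxy: the fraction of corpus token
    occurrences matched by some term in the ontology.
    (A stand-in, not the paper's actual metric family.)"""
    terms = {t.lower() for t in ontology_terms}
    counts = Counter(tok.lower() for tok in corpus_tokens)
    covered = sum(n for tok, n in counts.items() if tok in terms)
    total = sum(counts.values())
    return covered / total if total else 0.0

# Hypothetical mini-ontology and mini-corpus
onto = {"fever", "cough", "infection"}
corpus = "patient reports fever and persistent cough fever".split()
print(coverage(onto, corpus))  # 3 covered tokens of 7 ≈ 0.43
```

    A low score on a representative corpus would signal the kind of breach between ontology and collective discourse that the abstract's metrics are designed to expose.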